1 Youth Risk Behavior Surveillance

Every two years, the Centers for Disease Control and Prevention (CDC) conducts the Youth Risk Behavior Surveillance System (YRBSS) survey, collecting data from high schoolers (9th through 12th grade) to analyze health patterns. We will work with a selected group of variables from a random sample of observations from one of the years the YRBSS was conducted.

1.1 Load the data
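
The code throughout this report assumes a setup chunk (not shown) has already loaded the packages whose functions are used below; a minimal sketch of that setup:

library(tidyverse)  # dplyr, ggplot2, readr, tidyr, forcats
library(openintro)  # the yrbss dataset
library(skimr)      # skim()
library(mosaic)     # favstats()
library(infer)      # specify(), hypothesize(), generate(), calculate()
library(GGally)     # ggpairs()
library(here)       # here()
library(readxl)     # read_excel()
library(DBI)        # dbConnect(), dbListTables()
library(RSQLite)    # SQLite driver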

This data is part of the openintro package and we can load and inspect it. There are observations on 13 different variables, some categorical and some numerical. The meaning of each variable can be found by bringing up the help file:

?yrbss
data(yrbss)
glimpse(yrbss)
## Rows: 13,583
## Columns: 13
## $ age                      <int> 14, 14, 15, 15, 15, 15, 15, 14, 15, 15, 15, 1…
## $ gender                   <chr> "female", "female", "female", "female", "fema…
## $ grade                    <chr> "9", "9", "9", "9", "9", "9", "9", "9", "9", …
## $ hispanic                 <chr> "not", "not", "hispanic", "not", "not", "not"…
## $ race                     <chr> "Black or African American", "Black or Africa…
## $ height                   <dbl> NA, NA, 1.73, 1.60, 1.50, 1.57, 1.65, 1.88, 1…
## $ weight                   <dbl> NA, NA, 84.4, 55.8, 46.7, 67.1, 131.5, 71.2, …
## $ helmet_12m               <chr> "never", "never", "never", "never", "did not …
## $ text_while_driving_30d   <chr> "0", NA, "30", "0", "did not drive", "did not…
## $ physically_active_7d     <int> 4, 2, 7, 0, 2, 1, 4, 4, 5, 0, 0, 0, 4, 7, 7, …
## $ hours_tv_per_school_day  <chr> "5+", "5+", "5+", "2", "3", "5+", "5+", "5+",…
## $ strength_training_7d     <int> 0, 0, 0, 0, 1, 0, 2, 0, 3, 0, 3, 0, 0, 7, 7, …
## $ school_night_hours_sleep <chr> "8", "6", "<5", "6", "9", "8", "9", "6", "<5"…

Now we summarise the dataset with skim(), which reports missing values and summary statistics for each variable, along with a very rough inline histogram for the numerical ones.

skim(yrbss)
Data summary
Name yrbss
Number of rows 13583
Number of columns 13
_______________________
Column type frequency:
character 8
numeric 5
________________________
Group variables None

Variable type: character

skim_variable n_missing complete_rate min max empty n_unique whitespace
gender 12 1.00 4 6 0 2 0
grade 79 0.99 1 5 0 5 0
hispanic 231 0.98 3 8 0 2 0
race 2805 0.79 5 41 0 5 0
helmet_12m 311 0.98 5 12 0 6 0
text_while_driving_30d 918 0.93 1 13 0 8 0
hours_tv_per_school_day 338 0.98 1 12 0 7 0
school_night_hours_sleep 1248 0.91 1 3 0 7 0

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
age 77 0.99 16.16 1.26 12.00 15.0 16.00 17.00 18.00 ▁▂▅▅▇
height 1004 0.93 1.69 0.10 1.27 1.6 1.68 1.78 2.11 ▁▅▇▃▁
weight 1004 0.93 67.91 16.90 29.94 56.2 64.41 76.20 180.99 ▆▇▂▁▁
physically_active_7d 273 0.98 3.90 2.56 0.00 2.0 4.00 7.00 7.00 ▆▂▅▃▇
strength_training_7d 1176 0.91 2.95 2.58 0.00 0.0 3.00 5.00 7.00 ▇▂▅▂▅

1.2 Exploratory Data Analysis

We start by analysing the weight of participants in kilograms. From the histogram and summary statistics below we can see that the distribution of weights is right (positively) skewed, and that there are 1,004 missing values.

# stats
summary(yrbss$weight)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's 
##      30      56      64      68      76     181    1004
# plot histogram
yrbss %>% 
  filter(!is.na(weight)) %>% 
  ggplot(aes(x=weight))+
  geom_histogram(bins=30)+
  NULL

Next, consider the possible relationship between a high schooler’s weight and their physical activity. We plot the data to quickly visualise trends, identify strong associations, and develop research questions.

We create a new variable in the dataframe yrbss, called physical_3plus, which will be yes if the respondent is physically active for at least 3 days a week, and no otherwise.

yrbss <- yrbss %>% 
  mutate(physical_3plus = case_when(
    physically_active_7d >= 3 ~ "yes",
    physically_active_7d < 3 ~ "no",
    T ~ "NA"
  )) %>% 
  filter(physical_3plus!="NA") # remove null values



# group by and summarise
yrbss_prop <- yrbss %>% 
  group_by(physical_3plus) %>% 
  summarise(n = n()) %>% 
  mutate(prop= n/sum(n))
  
# another way: count
# yrbss_prop <- yrbss %>%
#  count(physical_3plus, sort=TRUE) %>%
#  mutate(prop= n/sum(n))
yrbss_prop
## # A tibble: 2 × 3
##   physical_3plus     n  prop
##   <chr>          <int> <dbl>
## 1 no              4404 0.331
## 2 yes             8906 0.669

1.2.1 Calculating a confidence interval
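
In formula form, the interval computed below is the usual t-based confidence interval for a sample proportion, \(\hat{p} \pm t^{*}\sqrt{\hat{p}(1-\hat{p})/n}\), where \(\hat{p}\) is the sample proportion answering "no" and \(n\) is the total number of respondents; with \(n\) this large, \(t^{*} \approx 1.96\).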

# note: std_error here is the standard error (estimated standard deviation) of the sample proportion

not_prop <- yrbss_prop %>% 
  filter(physical_3plus=="no") %>%
  pull("prop")

not_n <- yrbss_prop %>% 
  filter(physical_3plus=="no") %>%
  pull("n")

# estimation of sd
std_error <- sqrt(not_prop * (1-not_prop) / (sum(yrbss_prop$n) ))

# critical value from the t distribution, about 1.9605 here (essentially the normal 1.96)
t_critical <- qt(0.975, not_n - 1)

margin_of_error <- t_critical * std_error

# ci
phy_3plus_low <- not_prop - margin_of_error
phy_3plus_high <- not_prop + margin_of_error

print(sprintf("95%% confidence interval is [%f,%f]",phy_3plus_low,phy_3plus_high))
## [1] "95% confidence interval is [0.322883,0.338875]"
  • The 95% confidence interval for the population proportion of high schoolers who are NOT active 3 or more days per week is [0.322883, 0.338875].

1.2.2 Boxplot

Next we make a boxplot of physical_3plus vs. weight to check the relationship between these two variables.

yrbss %>%
  filter(physical_3plus!="NA") %>% 
  ggplot(aes(x = physical_3plus , y = weight)) +
  geom_boxplot()+
  labs(title = "Boxplot of Active for at least 3 days vs Weight",
       x = "Active at least 3 days",
       y = "Weight")+
  NULL

Conclusion:

  • No clear relationship can be identified from the boxplot alone. We expected that the more students exercise, the less they would weigh.

  • However, we can see that the median weight of those who are physically active for at least three days a week is greater than the median of those who are active fewer than three days. This may be because working out/exercising builds muscle and bone mass, which adds weight.

1.3 Confidence Interval (Difference of means)

Boxplots show how the medians of the two distributions compare, but we can also compare the means of the distributions using either a confidence interval or a hypothesis test.
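
For each group, the interval below follows the one-sample t formula \(\bar{x} \pm t^{*}\,s/\sqrt{n}\). Note that the code approximates \(t^{*}\) by 1.96 and divides by the total sample size rather than each group's own \(n\); using the group sizes would give slightly wider intervals, which in this case still do not overlap.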

yrbss_stats <- favstats(weight~physical_3plus, data=yrbss,na.rm = T)

# use formulas
yrbss_stats_alt <- yrbss %>% 
  group_by(physical_3plus) %>% 
  summarise(avg_weight = mean(weight,na.rm=T),
            sd_weight_mean = sd(weight,na.rm=T),
            n=n())

# approximate the t-critical value by 1.96
t_critical <- 1.96 # large-sample approximation of qt(0.975, n - 1)

no_ci_lower <- 66.674 - t_critical*17.638/sqrt(sum(yrbss_stats_alt$n))
no_ci_higher <- 66.674 + t_critical*17.638/sqrt(sum(yrbss_stats_alt$n))
print(sprintf("weights of 'no': 95%% confidence interval is [%f,%f]",no_ci_lower,no_ci_higher))
## [1] "weights of 'no': 95% confidence interval is [66.374349,66.973651]"
yes_ci_lower <- 68.448 - t_critical*16.478/sqrt(sum(yrbss_stats_alt$n))
yes_ci_higher <- 68.448 + t_critical*16.478/sqrt(sum(yrbss_stats_alt$n))
print(sprintf("weights of 'yes': 95%% confidence interval is [%f,%f]",yes_ci_lower,yes_ci_higher))
## [1] "weights of 'yes': 95% confidence interval is [68.168056,68.727944]"
  • There is an observed difference of about 1.77 kg (68.45 - 66.67), and the two confidence intervals do not overlap, which suggests the difference is statistically significant at the 95% confidence level. Let us also conduct a formal hypothesis test.

1.4 Hypothesis test with formula (Difference of means)

Null hypothesis \(H_0:\bar{weight}_{\geq 3\,days}-\bar{weight}_{<3\,days}=0\)

Alternative hypothesis \(H_1:\bar{weight}_{\geq 3\,days}-\bar{weight}_{<3\,days}\neq0\)

t.test(weight ~ physical_3plus, data = yrbss) # assume different variance
## 
##  Welch Two Sample t-test
## 
## data:  weight by physical_3plus
## t = -5, df = 7479, p-value = 9e-08
## alternative hypothesis: true difference in means between group no and group yes is not equal to 0
## 95 percent confidence interval:
##  -2.42 -1.12
## sample estimates:
##  mean in group no mean in group yes 
##              66.7              68.4

1.5 Hypothesis test with infer

Next, we use the infer package’s hypothesize() workflow to conduct the same hypothesis test.

First, we calculate the observed difference in means, which we will save as obs_diff.

obs_diff <- yrbss %>%
  specify(weight ~ physical_3plus) %>%
  calculate(stat = "diff in means", order = c("yes", "no"))

obs_diff
## Response: weight (numeric)
## Explanatory: physical_3plus (factor)
## # A tibble: 1 × 1
##    stat
##   <dbl>
## 1  1.77

The statistic we are interested in is the difference in means, calculated in the order yes - no.

After computing the observed statistic, we simulate the null distribution by permutation, which we will save as null_dist.

null_dist <- yrbss %>%
  # specify variables
  specify(weight ~ physical_3plus) %>%
  
  # assume independence, i.e, there is no difference
  hypothesize(null = "independence") %>%
  
  # generate 1000 reps, of type "permute"
  generate(reps = 1000, type = "permute") %>%
  
  # calculate statistic of difference, namely "diff in means"
  calculate(stat = "diff in means", order = c("yes", "no"))

We can visualize this null distribution with the following code:

ggplot(data = null_dist, aes(x = stat)) +
  geom_histogram()+
  NULL

Now that the null distribution is formed, we will visualise it to see how many of these null permutations have a difference of at least the observed statistic of 1.77. We will also calculate the p-value for the hypothesis test using the function infer::get_p_value().

null_dist %>% visualize() +
  shade_p_value(obs_stat = obs_diff, direction = "two-sided")

null_dist %>%
  get_p_value(obs_stat = obs_diff, direction = "two_sided")
## # A tibble: 1 × 1
##   p_value
##     <dbl>
## 1       0
  • In 1000 permutations, not a single one has a difference of at least the observed statistic of 1.77. The p-value is therefore reported as 0, but this result is an approximation based on the number of reps chosen in the generate() step.

  • Since the p-value is (approximately) 0, we reject the null hypothesis: there is a significant difference in mean weight between the two groups.

2 IMDB ratings: Differences between directors

We would like to explore whether the mean IMDB ratings for Steven Spielberg and Tim Burton are the same. First, we replicate the graph showing the confidence intervals for the mean ratings of these two directors; as you can see, the intervals overlap.

To replicate the graph, we first filter for the movies of these two directors, then group by director and summarise the mean, standard deviation and number of movies, from which we calculate the standard error, t-critical value and margin of error to obtain the confidence interval for each director’s mean rating.

We then plot the data, add error bars around the mean ratings, and mark the overlap of the confidence intervals.

movies <- read_csv(here::here('data', 'movies.csv'))
# modify the dataset for plot
movies_comp <- movies %>% 
  filter(director %in% c("Steven Spielberg","Tim Burton")) %>%
  group_by(director) %>% 
  summarise(mean_rating = mean(rating),
            std_rating = sd(rating),
            count=n(),
            se_rating = std_rating / sqrt(count),
            t_critical = qt(0.975, count-1),
            margin_of_error = t_critical * se_rating,
            rating_low = mean_rating - margin_of_error,
            rating_high = mean_rating + margin_of_error) %>% 
  arrange(desc(mean_rating))

# plot the data, add error bars and label the datapoint
movies_comp %>% 
  ggplot(mapping = aes(x = mean_rating, y = fct_relevel(director, "Tim Burton", "Steven Spielberg")))+
  
  geom_point(size = 5, mapping = aes(color = director))+
  geom_errorbar(aes(xmin = rating_low, xmax = rating_high, color = director), width = 0.1,size = 1.5)+
  ggrepel::geom_text_repel(aes(x=mean_rating, label = round(mean_rating, 2)), size=4, position = position_dodge(0.9)) +
  ggrepel::geom_text_repel(aes(x=rating_low, label = round(rating_low, 2)),size = 3, position = position_dodge(0.9)) +
  ggrepel::geom_text_repel(aes(x=rating_high, label = round(rating_high, 2)),size = 3, position = position_dodge(0.9)) +

# plot the grey rectangle to show the overlap of confidence intervals
  geom_rect(xmin = 7.27,xmax = 7.33, ymin=0, ymax=Inf, alpha=0.4)+
  labs(title = "Do Spielberg and Burton have the same mean IMDB ratings?",
       subtitle = "95% confidence interval overlap",
       x = "Mean IMDB Rating",
       y = "") +
  NULL

In addition, we run a hypothesis test using both the t.test command and the infer package, simulating from a null distribution in which we assume zero difference between the two directors’ mean ratings.

First of all, the null hypothesis and the alternative hypothesis are:

  • H0: Spielberg and Burton have the same mean IMDB rating for the movies they direct
  • H1: Spielberg has a different mean IMDB rating from Burton
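
The t-statistic computed by hand below uses the two-sample (Welch) formula \(t = \frac{\bar{x}_S - \bar{x}_B}{\sqrt{s_S^2/n_S + s_B^2/n_B}}\), where the subscripts S and B denote Spielberg and Burton, with the means (7.57, 6.93), standard deviations (0.695, 0.749) and movie counts (23, 16) taken from movies_comp above.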
# movies_comp

# calculate the t-statistic and p-value by hand
t_stat =  (7.57-6.93)/sqrt(0.695*0.695/23 + 0.749*0.749/16)
p_value = 2*(1-pt(2.7,37))
paste("t_stat:",t_stat)
## [1] "t_stat: 2.70295796381348"
paste("p_value:",p_value)
## [1] "p_value: 0.0103936041036872"

As we can see from the calculation above, the hand-calculated t-statistic is 2.70 and the p-value is about 0.01, indicating that H0 should be rejected.

Next we use the t.test command:

movies <- read_csv(here::here("data", "movies.csv"))
glimpse(movies)
## Rows: 2,961
## Columns: 11
## $ title               <chr> "Avatar", "Titanic", "Jurassic World", "The Avenge…
## $ genre               <chr> "Action", "Drama", "Action", "Action", "Action", "…
## $ director            <chr> "James Cameron", "James Cameron", "Colin Trevorrow…
## $ year                <dbl> 2009, 1997, 2015, 2012, 2008, 1999, 1977, 2015, 20…
## $ duration            <dbl> 178, 194, 124, 173, 152, 136, 125, 141, 164, 93, 1…
## $ gross               <dbl> 7.61e+08, 6.59e+08, 6.52e+08, 6.23e+08, 5.33e+08, …
## $ budget              <dbl> 2.37e+08, 2.00e+08, 1.50e+08, 2.20e+08, 1.85e+08, …
## $ cast_facebook_likes <dbl> 4834, 45223, 8458, 87697, 57802, 37723, 13485, 920…
## $ votes               <dbl> 886204, 793059, 418214, 995415, 1676169, 534658, 9…
## $ reviews             <dbl> 3777, 2843, 1934, 2425, 5312, 3917, 1752, 1752, 35…
## $ rating              <dbl> 7.9, 7.7, 7.0, 8.1, 9.0, 6.5, 8.7, 7.5, 8.5, 7.2, …
movies_hypo <- movies %>%
  filter(director %in% c("Steven Spielberg","Tim Burton")) 

t.test(rating ~ director, data = movies_hypo)
## 
##  Welch Two Sample t-test
## 
## data:  rating by director
## t = 3, df = 31, p-value = 0.01
## alternative hypothesis: true difference in means between group Steven Spielberg and group Tim Burton is not equal to 0
## 95 percent confidence interval:
##  0.16 1.13
## sample estimates:
## mean in group Steven Spielberg       mean in group Tim Burton 
##                           7.57                           6.93
  • From the t.test, we get a t-statistic of 2.7 and a p-value of 0.01, which delivers the same message: H0 should be rejected.

Finally, we run the simulation from a null distribution in which we hypothesize that the mean ratings for Spielberg and Burton are the same.

diff <- movies_hypo %>%
  specify(rating ~ director) %>%
  calculate(stat = "diff in means", order = c("Steven Spielberg", "Tim Burton"))

set.seed(1234)
null_dist_movies <- movies_hypo %>%
  # specify variables
  specify(rating ~ director) %>%
  
  # assume independence, i.e, there is no difference
  hypothesize(null = "independence") %>%
  
  # generate 1000 reps, of type "permute"
  generate(reps = 1000, type = "permute") %>%
  
  # calculate statistic of difference, namely "diff in means"
  calculate(stat = "diff in means", order = c("Steven Spielberg", "Tim Burton"))

null_dist_movies %>% 
  visualise()+
  shade_p_value(obs_stat = diff, direction = "two-sided")

p_value <- null_dist_movies %>% 
  get_pvalue(obs_stat = diff, direction="both")

p_value
## # A tibble: 1 × 1
##   p_value
##     <dbl>
## 1   0.008
  • The simulation gives a p-value of 0.008.

  • All three methods deliver the same message: H0 should be rejected. We are 95% confident that Spielberg’s movies have a higher mean IMDB rating than Burton’s movies.

3 Omega Group plc- Pay Discrimination

At the last board meeting of Omega Group Plc., the headquarters of a large multinational company, the issue was raised that women were being discriminated against in the company, in the sense that salaries were not the same for male and female executives. A quick analysis of a sample of 50 employees (24 men and 26 women) revealed that the average salary for men was about 8,700 higher than for women. This seemed like a considerable difference, so it was decided that a further analysis of the company salaries was warranted.

Now we need to carry out the analysis. The objective is to find out whether there is indeed a significant difference between the salaries of men and women, and whether the difference is due to discrimination or whether it is based on another, possibly valid, determining factor.

3.1 Loading the data

omega <- read_csv(here::here("data", "omega.csv"))
glimpse(omega) # examine the data frame
## Rows: 50
## Columns: 3
## $ salary     <dbl> 81894, 69517, 68589, 74881, 65598, 76840, 78800, 70033, 635…
## $ gender     <chr> "male", "male", "male", "male", "male", "male", "male", "ma…
## $ experience <dbl> 16, 25, 15, 33, 16, 19, 32, 34, 1, 44, 7, 14, 33, 19, 24, 3…

3.2 Relationship Salary - Gender

The data frame omega contains the salaries for the sample of 50 executives in the company. We want to investigate whether there is a significant difference between the salaries of the male and female executives by performing the following analyses and checking whether they lead to the same conclusion:

  • Confidence intervals
  • Hypothesis testing
  • Correlation analysis
  • Regression (a brief sketch is given at the end of Section 3.5)

First we calculate summary statistics on salary by gender, and create and print a dataframe where, for each gender, we show the mean, SD, sample size, the t-critical value, the SE, the margin of error, and the low/high endpoints of a 95% confidence interval.

# Summary Statistics of salary by gender
mosaic::favstats (salary ~ gender, data=omega) %>% 

# Dataframe with two rows (male-female) and having as columns gender, mean, SD, sample size, 
# the t-critical value, the standard error, the margin of error,
# and the low/high endpoints of a 95% confidence interval
  mutate(t_critical = qt(0.975,n-1),
         sd_mean = sd/sqrt(n),
         margin_of_error = t_critical*sd_mean,
         ci_lower = mean-margin_of_error,
         ci_higher = mean+margin_of_error)
##   gender   min    Q1 median    Q3   max  mean   sd  n missing t_critical
## 1 female 47033 60338  64618 70033 78800 64543 7567 26       0       2.06
## 2   male 54768 68331  74675 78568 84576 73239 7463 24       0       2.07
##   sd_mean margin_of_error ci_lower ci_higher
## 1    1484            3056    61486     67599
## 2    1523            3151    70088     76390
  • Since the two 95% confidence intervals do not overlap, we are at least 95% confident that male executives have a higher average salary than female executives.

We will also run a hypothesis test, assuming as a null hypothesis that the mean difference in salaries is zero, or that, on average, men and women make the same amount of money.

# hypothesis testing using t.test() 
t.test(salary ~ gender, data=omega)
## 
##  Welch Two Sample t-test
## 
## data:  salary by gender
## t = -4, df = 48, p-value = 2e-04
## alternative hypothesis: true difference in means between group female and group male is not equal to 0
## 95 percent confidence interval:
##  -12973  -4420
## sample estimates:
## mean in group female   mean in group male 
##                64543                73239
# hypothesis testing using infer package
obs_diff <- omega %>%
  specify(salary ~ gender) %>%
  calculate(stat = "diff in means", order = c("female", "male"))

null_dist <- omega %>%
  # specify variables
  specify(salary ~ gender) %>%
  
  # assume independence, i.e, there is no difference
  hypothesize(null = "independence") %>%
  
  # generate 1000 reps, of type "permute"
  generate(reps = 1000, type = "permute") %>%
  
  # calculate statistic of difference, namely "diff in means"
  calculate(stat = "diff in means", order = c("female", "male"))

null_dist %>% 
  visualise()+
  shade_p_value(obs_stat = obs_diff, direction = "two-sided")

# Warning: Please be cautious in reporting a p-value of 0. This result is an approximation based on the number of `reps` chosen in the `generate()` step.
p_value <- null_dist %>% 
  get_pvalue(obs_stat = obs_diff, direction="both")
p_value
## # A tibble: 1 × 1
##   p_value
##     <dbl>
## 1       0
  • Here our null hypothesis is that the difference in mean salary at Omega between male and female executives is zero, and we get a p-value very close to 0. Therefore we reject the null hypothesis at the 95% confidence level: there is a significant difference between the average male and female salaries.

3.3 Relationship Experience - Gender?

At the board meeting, someone raised the issue that there was indeed a substantial difference between male and female salaries, but that this was attributable to other reasons such as differences in experience. A questionnaire sent out to the 50 executives in the sample reveals that the average experience of the men is approximately 21 years, whereas the women have only about 7 years of experience on average (see table below).

# Summary Statistics of experience by gender
favstats (experience ~ gender, data=omega)
##   gender min    Q1 median   Q3 max  mean    sd  n missing
## 1 female   0  0.25    3.0 14.0  29  7.38  8.51 26       0
## 2   male   1 15.75   19.5 31.2  44 21.12 10.92 24       0

Now we perform similar analyses as in the previous section to see if there is a significant difference between the experience of the male and female executives.

# Summary Statistics of experience by gender
mosaic::favstats (experience ~ gender, data=omega) %>% 

# Dataframe with two rows (male-female) and having as columns gender, mean, SD, sample size, 
# the t-critical value, the standard error, the margin of error,
# and the low/high endpoints of a 95% confidence interval
  mutate(t_critical = qt(0.975,n-1),
         sd_mean = sd/sqrt(n),
         margin_of_error = t_critical*sd_mean,
         ci_lower = mean-margin_of_error,
         ci_higher = mean+margin_of_error)
##   gender min    Q1 median   Q3 max  mean    sd  n missing t_critical sd_mean
## 1 female   0  0.25    3.0 14.0  29  7.38  8.51 26       0       2.06    1.67
## 2   male   1 15.75   19.5 31.2  44 21.12 10.92 24       0       2.07    2.23
##   margin_of_error ci_lower ci_higher
## 1            3.44     3.95      10.8
## 2            4.61    16.52      25.7
  • Since the two 95% confidence intervals do not overlap, we are at least 95% confident that male executives have more experience than female executives on average.
# hypothesis testing using t.test() 
t.test(experience ~ gender, data=omega)
## 
##  Welch Two Sample t-test
## 
## data:  experience by gender
## t = -5, df = 43, p-value = 1e-05
## alternative hypothesis: true difference in means between group female and group male is not equal to 0
## 95 percent confidence interval:
##  -19.35  -8.13
## sample estimates:
## mean in group female   mean in group male 
##                 7.38                21.12
# hypothesis testing using infer package
obs_diff <- omega %>%
  specify(experience ~ gender) %>%
  calculate(stat = "diff in means", order = c("female", "male"))

null_dist <- omega %>%
  # specify variables
  specify(experience ~ gender) %>%
  
  # assume independence, i.e, there is no difference
  hypothesize(null = "independence") %>%
  
  # generate 1000 reps, of type "permute"
  generate(reps = 1000, type = "permute") %>%
  
  # calculate statistic of difference, namely "diff in means"
  calculate(stat = "diff in means", order = c("female", "male"))

null_dist %>% 
  visualise()+
  shade_p_value(obs_stat = obs_diff, direction = "two-sided")

# Warning: Please be cautious in reporting a p-value of 0. This result is an approximation based on the number of `reps` chosen in the `generate()` step.
p_value <- null_dist %>% 
  get_pvalue(obs_stat = obs_diff, direction="both")
p_value
## # A tibble: 1 × 1
##   p_value
##     <dbl>
## 1       0
  • Based on the evidence we can conclude that there is a significant difference between the experience of the male and female executives. This finding calls into question our previous conclusion that the difference in male and female salaries is due to discrimination.

3.4 Relationship Salary - Experience?

Someone at the meeting argues that clearly, a more thorough analysis of the relationship between salary and experience is required before any conclusion can be drawn about whether there is any gender-based salary discrimination in the company.

Next we analyse the relationship between salary and experience. Here is a scatterplot to visually inspect the data:

omega %>%  
  ggplot(aes(x=experience, y=salary)) + 
  geom_point()+
  labs(title = "Scatterplot of Experience against Salary")+
  NULL

3.5 Check correlations between the data

We use GGally::ggpairs() to create a scatterplot and correlation matrix. Essentially, we change the order in which our variables appear so that the dependent variable (Y), salary, comes last in our list.

omega %>% 
  select(gender, experience, salary) %>% #order variables they will appear in ggpairs()
  ggpairs(aes(colour=gender, alpha = 0.3))+
  theme_bw()

  • We can see from the panel showing experience vs salary that the correlation is 0.803 and is significantly different from 0 (marked with three stars), which means we can be confident that experience is positively correlated with salary.
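
The list of analyses at the start of this section also mentions regression, which does not appear elsewhere in this report; as a minimal sketch (assuming the same omega dataframe), one could control for experience when comparing salaries like this:

# sketch: salary gap between genders after adjusting for experience (not run in the original report)
model <- lm(salary ~ gender + experience, data = omega)
summary(model) # the gender coefficient estimates the salary gap holding experience constant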

4 Challenge 1: Brexit plot

The original graph:

Our replica:

brexit_results <- read_csv(here::here("data","brexit_results.csv"))

brexit_results %>% 
  # transform the dataframe
  pivot_longer(cols = con_2015:ukip_2015,
               names_to = "party",
               values_to = "Party_percent") %>%
  # rename columns names of the parties
  mutate(party = case_when(
    party == "con_2015" ~ "Conservative",
    party == "lab_2015" ~ "Labour",
    party == "ld_2015" ~ "Lib Dems",
    party == "ukip_2015" ~ "UKIP"
  )) %>% 
  ggplot(aes(x=Party_percent, y=leave_share, color=party))+
  # scatter plot
  geom_point(alpha = 0.5)+
  # linear smooth line
  geom_smooth(method = "lm")+
  # colour the points of each party
  scale_colour_manual(values = c("#0087dc","#D50000","#FDBB30","#EFE600"))+
  # theme and legend settings (apply theme_bw() before tweaking the legend so it is not overridden)
  theme_bw()+
  theme(legend.position = "bottom",
        legend.title = element_blank())+
  labs(title="How Political Affiliation Translated to Brexit Voting",
       x="Party % in the UK 2015 general election",
       y="Leave % in the 2016 EU referendum")+
  NULL

5 Challenge 2: CDC COVID-19 Public Use Data

The CDC Covid-19 Case Surveillance Data is a case surveillance public use dataset with 12 elements for all COVID-19 cases shared with CDC; it includes demographics, any exposure history, disease severity indicators and outcomes, the presence of any underlying medical conditions, and risk behaviors.

There are well over 28 million individual entries, so we will work with an SQLite database rather than a CSV file. We will produce two graphs that show the death rate (%):

  1. by age group, sex, and whether the patient had co-morbidities or not
  2. by age group, sex, and whether the patient was admitted to an Intensive Care Unit (ICU) or not.

# more details and examples on connecting to an SQL database can be found at 
# https://mam2022.netlify.app/reference/reference_sql/

# set up a connection to sqlite database. 
# make sure the database file is in your working directory-- 
# put it at the root of am01

my_path <- "/Users/gaoyuan/Downloads/cdc_data.db"

cdc_db <- DBI::dbConnect(
  drv = RSQLite::SQLite(),
  dbname = my_path
)

# browse the tables in the database using DBI::dbListTables()
DBI::dbListTables(cdc_db)
## [1] "cdc"
# We can easily set these tables up as database objects using dplyr
cdc_data <- dplyr::tbl(cdc_db, "cdc")



# ************************
# You need to calculate Covid death % by age group, sex 
# and presence of co-morbidities (query1) and Covid death % 
# by age group, sex and ICU admission (query2) 

# rather than loading the entire file in memory, you will use dplyr+SQL,
# to generate a smaller dataset that you will use for your calculations and
# then visualisations

query1 <- cdc_data %>%
  filter(death_yn %in% c("Yes","No") &
           !age_group %in% c("Missing","NA") &
           sex %in% c("Male","Female") &
           medcond_yn %in% c("Yes","No")) %>% 
  mutate(medcond_yn = case_when(
    medcond_yn == "Yes" ~ "With Comorbidities",
    medcond_yn == "No" ~ "Without Comorbidities"
  )) %>% 
  group_by(age_group, sex, death_yn, medcond_yn) %>% 
  summarise(count=n())
  
  
# query1 is a lazy query: it stores the database connection and the dplyr operations, which are translated to SQL
class(query1)
## [1] "tbl_SQLiteConnection" "tbl_dbi"              "tbl_sql"             
## [4] "tbl_lazy"             "tbl"
# Generate actual SQL commands: We can either use dbplyr::sql_render() or dplyr::show_query()
dbplyr::sql_render(query1)
## <SQL> SELECT `age_group`, `sex`, `death_yn`, `medcond_yn`, COUNT(*) AS `count`
## FROM (SELECT `cdc_case_earliest_dt`, `cdc_report_dt`, `pos_spec_dt`, `onset_dt`, `current_status`, `sex`, `age_group`, `race_ethnicity_combined`, `hosp_yn`, `icu_yn`, `death_yn`, CASE
## WHEN (`medcond_yn` = 'Yes') THEN ('With Comorbidities')
## WHEN (`medcond_yn` = 'No') THEN ('Without Comorbidities')
## END AS `medcond_yn`
## FROM `cdc`
## WHERE (`death_yn` IN ('Yes', 'No') AND NOT(`age_group` IN ('Missing', 'NA')) AND `sex` IN ('Male', 'Female') AND `medcond_yn` IN ('Yes', 'No')))
## GROUP BY `age_group`, `sex`, `death_yn`, `medcond_yn`
# execute query and retrieve results in a tibble (dataframe). 
query1_tibble <- query1 %>% 
  collect() # collect runs the SQL query and returns the output of your dplyr pipe sequence

# calculate death rate
query1_tibble %>% 
  pivot_wider(names_from = death_yn,values_from=count) %>% 
  mutate(death_rate = Yes/(No+Yes)) %>% 
  # bar plot
  ggplot(aes(x=death_rate,y=age_group))+
  geom_col(fill="steelblue4")+
  # data label
  geom_text(aes(label = round(death_rate*100,1),hjust=0))+
  # 2x2 grid faceting
  facet_grid(medcond_yn ~ sex) +
  # x axis in %
  scale_x_continuous(label=scales::percent_format(accuracy = 1))+
  theme_bw()+
  labs(title = "Covid Death % group by age group, sex and presence of co-morbidities",
       x="",
       y="")+
  NULL

query2 <- cdc_data %>%
  filter(icu_yn %in% c("Yes","No") & 
           !age_group %in% c("Missing","NA") & 
           sex %in% c("Male","Female") &
           death_yn %in% c("Yes","No")) %>% 
  mutate(icu_yn = case_when(
    icu_yn == "Yes" ~ "ICU Admission",
    icu_yn == "No" ~ "No ICU Admission"
  )) %>% 
  group_by(age_group, sex, icu_yn, death_yn) %>% 
  summarise(count=n())

query2_tibble <- query2 %>% 
  collect()

# calculate icu rate
query2_tibble %>% 
  pivot_wider(names_from = death_yn, values_from=count) %>% 
  mutate(death_rate = Yes/(No+Yes))%>% 
  # bar plot
  ggplot(aes(x=death_rate,y=age_group))+
  geom_col(fill="salmon1")+
  # data label
  geom_text(aes(label = round(death_rate*100,1),hjust=0))+
  # 2x2 grid faceting
  facet_grid(icu_yn ~ sex) +
  # x axis in %
  scale_x_continuous(label=scales::percent_format(accuracy = 1))+
  theme_bw()+
  labs(title = "Covid Death % group by age group, sex and ICU admission",
       x="",
       y="")+
  NULL

6 Challenge 3: GDP components over time and among countries

At the risk of oversimplifying things, the main components of gross domestic product (GDP) are personal consumption (C), business investment (I), government spending (G) and net exports (exports - imports). More about GDP and the different approaches to calculating it can be found on the Wikipedia GDP page.

The GDP data we will look at is from the United Nations’ National Accounts Main Aggregates Database, which contains estimates of total GDP and its components for all countries from 1970 to today. We will look at how GDP and its components have changed over time, and compare countries by how much each component contributes to that country’s GDP. The file we will work with is GDP and its breakdown at constant 2010 prices in US Dollars, and it has already been saved in the data directory.

We inspected the Excel file to see how it is structured: it is organised by country and by GDP component, showing the volume in USD from 1970 to 2017.

UN_GDP_data  <-  read_excel(here::here("data", "Download-GDPconstant-USD-countries.xls"), # Excel filename
                sheet="Download-GDPconstant-USD-countr", # Sheet name
                skip=2) # Number of rows to skip

First we transform the data from a wide format to a long format. Then we tidy the data by expressing all figures in billions, and rename some of the indicators into something shorter:

  • Exports of goods and services -> Exports
  • Imports of goods and services -> Imports
  • General government final consumption expenditure -> Government expenditure
  • Gross Domestic Product (GDP) -> Gross Domestic Product
# check unique values of IndicatorName
unique(UN_GDP_data$IndicatorName)
##  [1] "Final consumption expenditure"                                                           
##  [2] "Household consumption expenditure (including Non-profit institutions serving households)"
##  [3] "General government final consumption expenditure"                                        
##  [4] "Gross capital formation"                                                                 
##  [5] "Gross fixed capital formation (including Acquisitions less disposals of valuables)"      
##  [6] "Exports of goods and services"                                                           
##  [7] "Imports of goods and services"                                                           
##  [8] "Gross Domestic Product (GDP)"                                                            
##  [9] "Agriculture, hunting, forestry, fishing (ISIC A-B)"                                      
## [10] "Mining, Manufacturing, Utilities (ISIC C-E)"                                             
## [11] "Manufacturing (ISIC D)"                                                                  
## [12] "Construction (ISIC F)"                                                                   
## [13] "Wholesale, retail trade, restaurants and hotels (ISIC G-H)"                              
## [14] "Transport, storage and communication (ISIC I)"                                           
## [15] "Other Activities (ISIC J-P)"                                                             
## [16] "Total Value Added"                                                                       
## [17] "Changes in inventories"
tidy_GDP_data  <- UN_GDP_data %>% 
  # transform into long format
  pivot_longer(cols = 4:51, names_to = "year", values_to = "volume") %>% 
  # tidy volume data into billions
  # shorten indicator names
  mutate(volume_bn = volume / 10^9, IndicatorName = case_when (
    IndicatorName == "Household consumption expenditure (including Non-profit institutions serving households)" ~ "Household expenditure",
    IndicatorName == "Exports of goods and services" ~ "Exports",
    IndicatorName == "Imports of goods and services" ~ "Imports",
    IndicatorName == "General government final consumption expenditure" ~ "Government expenditure",
    IndicatorName == "Gross capital formation" ~ "Gross capital formation",
    IndicatorName == "Gross Domestic Product (GDP)" ~ "Gross Domestic Product",
    T~"Others")) %>% 
  #keep only the key GDP components
  filter(IndicatorName!="Others")

glimpse(tidy_GDP_data)
## Rows: 63,072
## Columns: 6
## $ CountryID     <dbl> 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,…
## $ Country       <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanista…
## $ IndicatorName <chr> "Household expenditure", "Household expenditure", "House…
## $ year          <chr> "1970", "1971", "1972", "1973", "1974", "1975", "1976", …
## $ volume        <dbl> 5.07e+09, 4.84e+09, 4.70e+09, 5.21e+09, 5.59e+09, 5.65e+…
## $ volume_bn     <dbl> 5.07, 4.84, 4.70, 5.21, 5.59, 5.65, 5.68, 6.15, 6.30, 6.…
# Let us compare GDP components for these 3 countries
country_list <- c("United States","India", "Greece")
tidy_GDP_country <- tidy_GDP_data %>% 
  filter (Country %in% country_list)

head(tidy_GDP_country)
## # A tibble: 6 × 6
##   CountryID Country IndicatorName         year        volume volume_bn
##       <dbl> <chr>   <chr>                 <chr>        <dbl>     <dbl>
## 1       300 Greece  Household expenditure 1970  62638922061.      62.6
## 2       300 Greece  Household expenditure 1971  66051235689.      66.1
## 3       300 Greece  Household expenditure 1972  70029734547.      70.0
## 4       300 Greece  Household expenditure 1973  74311679937.      74.3
## 5       300 Greece  Household expenditure 1974  73529781615.      73.5
## 6       300 Greece  Household expenditure 1975  79050695436.      79.1

We replicate the chart below:

tidy_GDP_country %>% 
  filter(IndicatorName!="Gross Domestic Product") %>% 
  ggplot(aes(x=as.numeric(year), y=volume_bn, 
             color = IndicatorName)) +
  geom_line() +
  facet_wrap(~Country) +
  theme_bw() +
  scale_colour_manual("Components of GDP",
                      breaks = c("Gross capital formation", "Exports", "Government expenditure", "Household expenditure", "Imports"),
                      values = c("orange","brown","green","sky blue","purple")) +
  labs(title = "GDP components over time", 
       x="",
       y="Billion US$", 
       subtitle="In constant 2010 USD")+
  NULL

Secondly, recall that GDP is the sum of Household Expenditure (Consumption C), Gross Capital Formation (business investment I), Government Expenditure (G) and Net Exports (exports - imports). Even though there is an indicator Gross Domestic Product (GDP) in the dataframe, we would like to calculate it given its components discussed above.
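
In formula form, the identity we compute below is \(GDP = C + I + G + (X - M)\), where \(C\) is household expenditure, \(I\) gross capital formation, \(G\) government expenditure, and \(X - M\) net exports (exports minus imports).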

GDP_components <- tidy_GDP_country %>% 
  select(-volume) %>% 
  pivot_wider(names_from = IndicatorName,
              values_from = volume_bn)

# modify column names
colnames(GDP_components) <- gsub(" ","_", colnames(GDP_components) )

# calculate GDP
GDP_components <- GDP_components %>% 
  mutate(GDP = Household_expenditure + 
           Government_expenditure + 
           Gross_capital_formation +
           Exports - Imports)

GDP_diff <- GDP_components %>% 
  mutate(difference = (GDP-Gross_Domestic_Product)/Gross_Domestic_Product)

GDP_diff %>% 
  ggplot(aes(x=difference))+
  geom_histogram(bins=30) +
  geom_vline(aes(xintercept=mean(difference)))+
  labs(title="Histogram of differences of GDP given and calculated by hand")+
  NULL

# mean(GDP_diff$difference) = 0.6%
  • We can see the histogram of the differences in the graph above. The mean difference between our calculated GDP and the GDP figure given in the data is 0.6%.

GDP_components %>% 
  mutate(Household_expenditure = Household_expenditure/GDP,
         Government_expenditure  =  Government_expenditure/GDP,
         Gross_capital_formation = Gross_capital_formation/GDP,
         Net_Exports = (Exports - Imports)/GDP) %>% 
  select(c("CountryID","Country","year",
           "Household_expenditure","Government_expenditure","Gross_capital_formation","Net_Exports")) %>% 
  pivot_longer(4:7,
               names_to = "IndicatorName",
               values_to = "percentage") %>% 
  mutate(IndicatorName = gsub("_"," ",IndicatorName)) %>% 
  ggplot(aes(x=as.numeric(year),y=percentage, 
             color = IndicatorName)) +
  geom_line() +
  facet_wrap(~Country) +
  theme(legend.title = element_blank())+
  theme_bw() +
  scale_colour_manual("Components of GDP",
                      breaks = c("Gross capital formation", "Government expenditure", "Household expenditure", "Net Exports"),
                      values = c("orange","green","sky blue","purple")) +
  labs(title = "GDP and its breakdown at constant 2010 prices in US Dollars",
       x = "",
       y = "proportion",
       caption = "Source: United Nations, https://unstats.un.org/unsd/snaama/Downloads")+
  NULL

From the graphs above, we observe that:

  • For the three countries analysed, household expenditure makes up the largest share of Gross Domestic Product. The household expenditure share of the US and Greece is increasing, while that of India is decreasing. In absolute terms household expenditure surges over the period, whereas its proportion of GDP is comparatively stable.

  • Net exports make up the smallest share and are negative in most years for all three countries.

  • There may be a negative correlation between gross capital formation and net exports.

7 Details

Team Members: Alex Kubbinga, Clara Moreno Sanchez, Jean Huang, Raghav Mehta, Raina Doshi, Yuan Gao

Approximately how much time did you spend on this problem set: 8 hours

What, if anything, gave you the most trouble: NA